Allow anonymous social event exchange between clients; the MQTT broker will hide users' IPs and personal information. #3400
Conversation
The code is currently using a public test MQTT broker for testing:
```python
from pokemongo_bot.event_manager import EventHandler
import thread
import paho.mqtt.client as mqtt

class MyMQTTClass:
```
Reference from:
https://eclipse.googlesource.com/paho/org.eclipse.paho.mqtt.python/+/1.1/examples/sub-class.py
And username/password for deployment:
http://www.hivemq.com/blog/mqtt-security-fundamentals-authentication-username-password
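Putting the two references together, here is a hedged sketch of what an authenticated client wrapper might look like. It follows the sub-classing pattern from the linked paho example and the username/password approach from the HiveMQ article; the topic name, broker host, and credentials are placeholders, not part of this PR.

```python
import json

# Hypothetical topic name for encounter exchange; not from the PR.
TOPIC = "pgobot/encounters"

def encode_event(event):
    """Serialize an encounter event dict into a JSON payload (pure helper)."""
    return json.dumps(event, sort_keys=True)

class MQTTEventClient:
    """Sketch of a paho-mqtt wrapper with username/password auth.

    Broker host and credentials are illustrative placeholders.
    """

    def __init__(self, host, port=1883, username=None, password=None):
        import paho.mqtt.client as mqtt  # external dependency (paho-mqtt)
        self._mqttc = mqtt.Client()
        if username is not None:
            # Username/password auth, per the HiveMQ article above.
            self._mqttc.username_pw_set(username, password)
        self._host = host
        self._port = port

    def connect(self):
        # 60-second keep-alive, matching the snippet in this PR.
        self._mqttc.connect(self._host, self._port, 60)
        self._mqttc.loop_start()

    def publish(self, channel, event):
        self._mqttc.publish(channel, encode_event(event))
```

The `paho.mqtt.client` import is deferred into `__init__` so the pure payload helper can be used (and tested) without the dependency installed.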
How will this be better for everyone?
It is a neat idea to have the bots report the pokemon they've seen to each other. However, I'm not sure this is the right approach to implement it. I imagine the value propositions for something like this are:
I don't think a queue is the right approach for this. If clients want to search for specific pokemon to snipe, they wouldn't be able to do so; they would have to process every message the queue receives. If the central server is getting every pokemon, then the local clients will spend forever iterating through that queue trying to find the ones they care about. With the number of bots we have running, I think a queue will push any bot using this to 100% CPU utilization very quickly.

Instead of an event handler, I think this should be written as one or two tasks.

If one task: it would be similar to MoveToMapPokemon, but instead of connecting to an instance of a map, it would connect to a central server that it can use to search for specific pokemon. As part of using this task, it could be a good citizen and report back the pokemon visible to the bot.

If two tasks: one task that sends all the visible pokemon to the central server, and another that snipes pokemon from the list.

Either way, this seems like a great thing that should start as a plugin in a different repo. We don't risk our users' privacy, it is very clear that it is opt-in, and we wouldn't have to be so worried about the central server.

@douglascamata Do you have any other additional thoughts?
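The client-side filtering argued for above, i.e. asking a central server only for the pokemon this bot cares about instead of scanning a shared queue, could be sketched like this. The wanted list, range cap, and response shape are illustrative assumptions, not part of the PR.

```python
# Hypothetical per-bot snipe preferences; names are examples only.
WANTED = {"Dratini", "Snorlax"}
MAX_RANGE_KM = 500.0  # max-range cap suggested in the follow-up comment

def pick_snipe_targets(candidates, wanted=WANTED, max_range_km=MAX_RANGE_KM):
    """Filter server-provided encounters down to snipe-worthy ones.

    Each candidate is assumed to be a dict with "name" and "distance_km"
    keys; that response shape is an assumption for illustration.
    """
    return [
        c for c in candidates
        if c["name"] in wanted and c["distance_km"] <= max_range_km
    ]
```

With this shape, a bot only ever touches the handful of encounters matching its own preferences, instead of every sighting published by every other bot.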
@TheSavior Ahh, this explains it way better; now I can see the value. But what if it's filtered to only certain pokemon for sniping, and maybe a max range? Thank you for the explanation.
```python
    def publish(self, channel, message):
        self._mqttc.publish(channel, message)

    def connect_to_mqtt(self):
        self._mqttc.connect("test.mosca.io", 1883, 60)
```
this should be in the config file.
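A minimal sketch of what moving the broker settings into the config could look like, assuming a JSON config like the bot already uses. The key names (`mqtt_host`, etc.) are assumptions, not the bot's actual config schema.

```python
import json

# Fallbacks matching the values hard-coded in the PR; key names are
# hypothetical, not the bot's actual config schema.
DEFAULTS = {"mqtt_host": "test.mosca.io", "mqtt_port": 1883, "mqtt_keepalive": 60}

def load_broker_config(raw_json):
    """Merge user-supplied JSON config over the defaults."""
    config = dict(DEFAULTS)
    config.update(json.loads(raw_json))
    return config
```

Then `connect_to_mqtt` would call `self._mqttc.connect(config["mqtt_host"], config["mqtt_port"], config["mqtt_keepalive"])` instead of using literals.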
Yep, you're welcome to contribute on the branch.
@TheSavior you are right.
We need to discuss a lot more about how the server will process everything before we can choose a message broker. IMHO, whatever sends data to the server needs to be an async event handler. Why an event handler? Because we already have pokemon encounter events. They have pokemon name, id, expiration timestamp, lat, and lng (and I can probably include IV and CP), which is all the information we need for a crowdsourced map.

For the message broker, I believe we should use RabbitMQ and JSON, as we already do. We enqueue data and have many "workers" on the server dumping it into a database with geospatial capabilities and very fast write speeds, like MongoDB. Then we can use a microframework, like Flask or Hug (hug is awesome, thx @JSchwerberg), to serve a simple API for sniping and to tell users which place is best to farm which pokemon. We could also release a dump of our database every month, for example, to allow anyone to do geospatial statistical analysis on our data.

What do you guys think?
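The RabbitMQ-to-MongoDB worker described above could be sketched as follows. The document shape, queue name, and connection URIs are assumptions chosen so MongoDB's 2dsphere index can serve the geospatial queries; the event fields mirror the encounter data listed in the comment (name, id, expiration, lat, lng).

```python
import json

def encounter_to_geo_doc(event):
    """Map a pokemon encounter event to a GeoJSON-style MongoDB document.

    The field names follow the encounter event data mentioned above; the
    document layout itself is a hypothetical choice, not the PR's code.
    """
    return {
        "pokemon_id": event["id"],
        "name": event["name"],
        "expires_at": event["expiration"],
        "location": {
            "type": "Point",
            # GeoJSON ordering is [longitude, latitude].
            "coordinates": [event["lng"], event["lat"]],
        },
    }

def run_worker(queue="encounters", mongo_uri="mongodb://localhost"):
    """Sketch of one queue worker; all names and URIs are placeholders."""
    import pika     # RabbitMQ client (external dependency)
    import pymongo  # MongoDB client (external dependency)

    coll = pymongo.MongoClient(mongo_uri).pgo.encounters
    # 2dsphere index enables fast "pokemon near me" queries.
    coll.create_index([("location", pymongo.GEOSPHERE)])

    def on_message(ch, method, properties, body):
        coll.insert_one(encounter_to_geo_doc(json.loads(body)))

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue=queue)
    channel.basic_consume(queue=queue, on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()
```

The pure mapping function can be shared between the bot, the workers, and the API; the `pika`/`pymongo` imports are deferred so it stays usable without those dependencies installed.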
@douglascamata I totally agree with that. One thing I am very concerned about is that this current implementation doesn't just write the pokemon to a queue: all of the bots are writing to the same queue and reading from the same queue, so every bot has to process all of the sightings from every other bot. If we are reporting to some central server, that central server needs to be a bit smarter, just as you are proposing.

I would prefer us to partner with some existing project that already handles snipe data so that we don't have to build and manage that ourselves. We'd also hopefully avoid having to pay for hosting. I think releasing the dataset to the community would be really valuable and would help promote community goodwill.
@TheSavior You misunderstood something. We would put found pokemon in a queue (RabbitMQ) and query the API (flask/hug/webserver) for pokemon to snipe. We won't be listening for everyone's published encounters; we listen only for the encounters that we request. For example, if you want to snipe Dratini, you tell the server and it won't notify you about anything else (unless you request it).
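The server-side matching described here, where each client registers interest and an incoming encounter is routed only to the clients that asked for it, can be sketched in a few lines. The data structures are illustrative assumptions, not code from the PR.

```python
# client_id -> set of wanted pokemon names (hypothetical registry).
subscriptions = {}

def subscribe(client_id, names):
    """Record which pokemon a client wants to be notified about."""
    subscriptions.setdefault(client_id, set()).update(names)

def clients_to_notify(encounter_name):
    """Return only the clients that requested this pokemon, sorted for determinism."""
    return sorted(
        cid for cid, wanted in subscriptions.items()
        if encounter_name in wanted
    )
```

A client that never subscribed to Dratini simply never appears in `clients_to_notify("Dratini")`, which is the opt-in notification model this comment describes.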
Exactly, I agree with that implementation. I just wanted to make it clear that that is not what this PR is doing.
@TheSavior The design is the same as you discussed; the implementation is in progress, and my knowledge isn't enough to finish all the pieces. That's why I posted this branch and PR for discussion and contribution. It isn't meant to be only what's already done in the code.
+1 on #3400 (comment)

My two cents: the way I envision it, the individual bots would send their encounter data, along with their current lat/long, to either ZeroMQ or RabbitMQ. A consumer -- likely a Celery worker -- subscribes to that message queue; it handles the objects and drops the data into Couchbase/Mongo (both of which already do geospatial lookups in their default implementations). We can use Marshmallow to model out the objects, possibly even using it in the bot as well as the consumer and API, to guarantee data consistency across the document store; the best part is that if we're using Hug for the API, it has special handling for Marshmallow schemas.

It would be a huge undertaking, but using something like Marshmallow to enforce data consistency -- and to make our objects portable between the bots, the backend servers, the API, etc. -- will make this really easy to scale and extend going forward.
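A hedged sketch of the shared Marshmallow model proposed above. The field names follow the encounter data discussed earlier in the thread; the schema itself is an assumption about what the shared object could look like, and the stdlib helper mirrors the required-field check so the idea can be exercised without Marshmallow installed.

```python
def build_encounter_schema():
    """Return a hypothetical Marshmallow schema class for encounters.

    The import is deferred because marshmallow is an external dependency;
    the field list is an assumption, not an agreed-upon model.
    """
    from marshmallow import Schema, fields

    class EncounterSchema(Schema):
        pokemon_id = fields.Integer(required=True)
        name = fields.String(required=True)
        lat = fields.Float(required=True)
        lng = fields.Float(required=True)
        expires_at = fields.Integer(required=True)

    return EncounterSchema

# The same required-field set, usable anywhere without the dependency.
REQUIRED_FIELDS = {"pokemon_id", "name", "lat", "lng", "expires_at"}

def has_required_fields(doc, required=REQUIRED_FIELDS):
    """Stdlib stand-in for the presence check the schema would enforce."""
    return required <= set(doc)
```

Sharing one schema module between the bot, the consumer, and the API is what gives the data-consistency guarantee the comment is after: every component serializes and validates the same shape.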
@JSchwerberg That sounds right to me. However, there are many sites right now that already aggregate this data and provide a nice API for fetching live sniping data. One example: http://pokesnipers.com/ These websites already have all that infrastructure set up; they are paying for the servers, and they are handling the scaling. I would really prefer that we not build that ourselves so we can focus on the bot. If these sites didn't exist and weren't already widely used by other bots for sniping data, it would be a different situation. As it is, I don't feel we would be providing any additional value by reinventing the wheel here.
Closing this in favor of: #3672 |
Short Description:
Pushed this sample code for discussion and contribution. Exchanging Pokemon Go information will help everyone have a better experience.